Biases in datasets can be highly detrimental to proper statistical estimation. To counter this issue, importance weighting methods have been developed to match any biased distribution to its corresponding target unbiased distribution. The seminal Kernel Mean Matching (KMM) method is still considered state of the art in this field of research today. However, one of the main drawbacks of this method is its computational burden on large datasets. Building on previous works by Huang et al. (2007) and de Mathelin et al. (2021), we derive a novel importance-weighting algorithm that scales to large datasets by using a neural network to predict the instance weights. We show, on multiple public datasets and under various sample biases, that our proposed approach drastically reduces the computation time on large datasets while maintaining sample-bias correction performance similar to other importance-weighting methods. The proposed approach appears to be the only one able to perform relevant reweighting on large datasets of up to two million points within a reasonable time.
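To make the objective concrete, here is a minimal pure-Python sketch of the kernel mean matching idea on 1-D toy data: importance weights are found by minimizing the squared MMD between the reweighted source sample and the target sample via projected gradient descent. The function names and defaults are illustrative assumptions; the paper's actual contribution, predicting these weights with a neural network so the method scales to millions of points, is not reproduced here.

```python
import math

def rbf(a, b, gamma=1.0):
    """Gaussian RBF kernel on scalars."""
    return math.exp(-gamma * (a - b) ** 2)

def kmm_weights(source, target, gamma=1.0, lr=0.1, steps=2000, w_max=10.0):
    """Projected gradient descent on the KMM objective
    (1/n^2) w^T K w - (2/(n m)) w^T kappa, subject to 0 <= w_i <= w_max."""
    n, m = len(source), len(target)
    K = [[rbf(xi, xj, gamma) for xj in source] for xi in source]
    kappa = [sum(rbf(xi, z, gamma) for z in target) for xi in source]
    w = [1.0] * n
    for _ in range(steps):
        grad = [2.0 / n ** 2 * sum(K[i][j] * w[j] for j in range(n))
                - 2.0 / (n * m) * kappa[i] for i in range(n)]
        # project back onto the box constraint after each step
        w = [min(max(wi - lr * g, 0.0), w_max) for wi, g in zip(w, grad)]
    return w
```

On a toy example where the source sample spreads over [0, 2] but the target mass sits near 2, the source point closest to the target receives by far the largest weight, while points far from the target mass are driven toward zero.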
The prevailing paradigm in machine learning today is to use past observations to predict future ones. But what if we are interested in knowing the past given the present? This scenario is indeed one that astronomers must regularly contend with. To understand the formation of our universe, we must derive the time evolution of the visible mass content of galaxies. However, to observe a complete stellar lifetime, one would need to wait a billion years! To overcome this difficulty, astrophysicists leverage supercomputers and evolve simulated models of galaxies up to the current age of the universe, thereby establishing a mapping between observed radiation and star formation histories (SFHs). Such ground-truth SFHs are lacking for actual galaxy observations, where they are usually inferred, often with poor confidence, from spectral energy distributions (SEDs) using Bayesian fitting methods. In this investigation, we discuss the ability of unsupervised domain adaptation to derive accurate SFHs for galaxies from simulated data, as a necessary first step towards developing a technique that can ultimately be applied to observational data.
Designing new industrial materials with desired properties can be very expensive and time-consuming. The main difficulty is generating compounds that correspond to realistic materials. Indeed, the description of compounds as vectors of component proportions is characterized by discrete features and severe sparsity. Furthermore, traditional generative-model validation procedures such as visual inspection, FID, and Inception scores are tailored to images and therefore cannot be used in this context. To tackle these issues, we develop an original Binded-VAE model dedicated to the generation of discrete datasets with high sparsity. We validate the model with novel metrics adapted to the problem of compound generation. We show, on a real problem of rubber compound design, that the proposed approach outperforms standard generative models, which opens new perspectives for material design optimization.
The goal of this paper is to design active learning strategies that lead to domain adaptation under the assumption of Lipschitz functions. Building on previous work by Mansour et al. (2009), we adapt the concept of discrepancy distance between source and target distributions to restrict the maximization over the hypothesis class to a localized class of functions that perform accurate labeling on the source domain. We derive generalization error bounds for such active learning strategies in terms of the Rademacher average and the localized discrepancy, for general loss functions satisfying a regularity condition. A practical K-medoids algorithm that can handle the case of large datasets can be inferred from the theoretical bounds. Our numerical experiments show that the proposed algorithm is competitive against other state-of-the-art active learning techniques in the context of domain adaptation, in particular on large datasets of around one hundred thousand images.
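The K-medoids selection step can be sketched in a few lines. Below is a generic alternating K-medoids (assign each point to its nearest medoid, then move each medoid to the cluster member minimizing the within-cluster cost); the deterministic initialization and the scalar example are simplifications, not necessarily the exact variant derived from the paper's bounds.

```python
def k_medoids(points, k, dist, iters=20):
    """Plain alternating K-medoids clustering with a user-supplied distance."""
    medoids = points[:k]  # deterministic init: first k points (a simplification)
    for _ in range(iters):
        # assignment step: nearest medoid for each point
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: dist(p, medoids[i]))
            clusters[j].append(p)
        # update step: best representative inside each cluster
        new = [min(c, key=lambda cand: sum(dist(cand, q) for q in c)) if c else medoids[i]
               for i, c in enumerate(clusters)]
        if new == medoids:
            break
        medoids = new
    return medoids
```

For two well-separated groups on the real line, e.g. `[0, 1, 2]` and `[10, 11, 12]` with `k=2` and absolute difference as the distance, the procedure converges to the two group medians, 1 and 11.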
Over the past decade, neural networks have been successful at making predictions from biological sequences, especially in the context of regulatory genomics. As in other fields of deep learning, tools have been devised to extract features such as sequence motifs that can explain the predictions made by a trained network. Here we intend to go beyond explainable machine learning and introduce SEISM, a selective inference procedure to test the association between these extracted features and the predicted phenotype. In particular, we discuss how training a one-layer convolutional network is formally equivalent to selecting motifs maximizing some association score. We adapt existing sampling-based selective inference procedures by quantizing this selection over an infinite set to a large but finite grid. Finally, we show that sampling under a specific choice of parameters is sufficient to characterize the composite null hypothesis typically used for selective inference, a result that goes well beyond our particular framework. We illustrate the behavior of our method in terms of calibration, power and speed and discuss its power/speed trade-off with a simpler data-split strategy. SEISM paves the way to an easier analysis of neural networks used in regulatory genomics, and to more powerful methods for genome-wide association studies (GWAS).
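A toy illustration of "selecting a motif over a finite grid by maximizing an association score": here motifs are exhaustively enumerated k-mers, presence/absence is the extracted feature, and the score is a Pearson correlation with the phenotype. All names are illustrative assumptions; SEISM's actual selection involves convolutional filters and different association scores.

```python
from itertools import product

def presence(seq, motif):
    """Binary feature: does the motif occur in the sequence?"""
    return 1.0 if motif in seq else 0.0

def pearson(xs, ys):
    """Pearson correlation, defined as 0 when either side is constant."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx > 0 and vy > 0 else 0.0

def select_motif(seqs, phenotype, k=2, alphabet="ACGT"):
    """Score every k-mer on the finite grid; return the one with the
    strongest (absolute) association with the phenotype."""
    grid = ("".join(p) for p in product(alphabet, repeat=k))
    return max(grid, key=lambda m: abs(pearson([presence(s, m) for s in seqs], phenotype)))
```

Selective inference then asks whether the association of the *selected* motif remains significant once this maximization over the grid is accounted for; the sketch covers only the selection step.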
Objective: Accurate visual classification of bladder tissue during Trans-Urethral Resection of Bladder Tumor (TURBT) procedures is essential to improve early cancer diagnosis and treatment. During TURBT interventions, White Light Imaging (WLI) and Narrow Band Imaging (NBI) techniques are used for lesion detection. Each imaging technique provides diverse visual information that allows clinicians to identify and classify cancerous lesions. Computer vision methods that use both imaging techniques could improve endoscopic diagnosis. We address the challenge of tissue classification when annotations are available only in one domain, in our case WLI, and the endoscopic images correspond to an unpaired dataset, i.e. there is no exact equivalent for every image in both NBI and WLI domains. Method: We propose a semi-supervised Generative Adversarial Network (GAN)-based method composed of three main components: a teacher network trained on the labeled WLI data; a cycle-consistency GAN to perform unpaired image-to-image translation; and a multi-input student network. To ensure the quality of the synthetic images generated by the proposed GAN, we perform a detailed quantitative and qualitative analysis with the help of specialists. Conclusion: The overall average classification accuracy, precision, and recall obtained with the proposed method for tissue classification are 0.90, 0.88, and 0.89 respectively, while the same metrics obtained in the unlabeled domain (NBI) are 0.92, 0.64, and 0.94 respectively. The quality of the generated images is reliable enough to deceive specialists. Significance: This study shows the potential of using semi-supervised GAN-based classification to improve bladder tissue classification when annotations are limited in multi-domain data.
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Turning the weights to zero when training a neural network helps in reducing the computational complexity at inference. To progressively increase the sparsity ratio in the network without causing sharp weight discontinuities during training, our work combines soft-thresholding and straight-through gradient estimation to update the raw, i.e. non-thresholded, version of zeroed weights. Our method, named ST-3 for straight-through/soft-thresholding/sparse-training, obtains SoA results, both in terms of accuracy/sparsity and accuracy/FLOPS trade-offs, when progressively increasing the sparsity ratio in a single training cycle. In particular, despite its simplicity, ST-3 favorably compares to the most recent methods, adopting differentiable formulations or bio-inspired neuroregeneration principles. This suggests that the key ingredients for effective sparsification primarily lie in the ability to give the weights the freedom to evolve smoothly across the zero state while progressively increasing the sparsity ratio. Source code and weights available at https://github.com/vanderschuea/stthree
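The two core mechanisms, soft-thresholding of the weights and a straight-through update of their raw (non-thresholded) values, can be sketched as follows. The schedule that progressively increases the threshold (and hence the sparsity ratio) during training is omitted, and the function names are illustrative, not ST-3's actual API.

```python
def soft_threshold(w, t):
    """Shrink toward zero: values in [-t, t] become exactly zero,
    others move smoothly by t, avoiding sharp weight discontinuities."""
    if w > t:
        return w - t
    if w < -t:
        return w + t
    return 0.0

def st_update(raw, grads, t, lr):
    """Straight-through step: the loss (and its gradient) is evaluated on
    the thresholded weights, but the update is applied to the raw weights,
    so currently-zeroed weights can keep moving and later reactivate."""
    return [r - lr * g for r, g in zip(raw, grads)]

def effective_and_sparsity(raw, t):
    """Thresholded weights actually used at inference, plus the zero ratio."""
    eff = [soft_threshold(r, t) for r in raw]
    return eff, sum(1 for e in eff if e == 0.0) / len(eff)
```

Note how a raw weight of 0.05 under threshold 0.1 contributes an exactly-zero effective weight, yet a single gradient step on the raw value can push it back above the threshold, which is precisely the "freedom to evolve smoothly across the zero state" the abstract refers to.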
Although query-based systems (QBS) have become one of the main solutions to share data anonymously, building QBSes that robustly protect the privacy of individuals contributing to the dataset is a hard problem. Theoretical solutions relying on differential privacy guarantees are difficult to implement correctly with reasonable accuracy, while ad-hoc solutions might contain unknown vulnerabilities. Evaluating the privacy provided by QBSes must thus be done by evaluating the accuracy of a wide range of privacy attacks. However, existing attacks require time and expertise to develop, need to be manually tailored to the specific systems attacked, and are limited in scope. In this paper, we develop QuerySnout (QS), the first method to automatically discover vulnerabilities in QBSes. QS takes as input a target record and the QBS as a black box, analyzes its behavior on one or more datasets, and outputs a multiset of queries together with a rule to combine answers to them in order to reveal the sensitive attribute of the target record. QS uses evolutionary search techniques based on a novel mutation operator to find a multiset of queries susceptible to lead to an attack, and a machine learning classifier to infer the sensitive attribute from answers to the queries selected. We showcase the versatility of QS by applying it to two attack scenarios, three real-world datasets, and a variety of protection mechanisms. We show the attacks found by QS to consistently equate or outperform, sometimes by a large margin, the best attacks from the literature. We finally show how QS can be extended to QBSes that require a budget, and apply QS to a simple QBS based on the Laplace mechanism. Taken together, our results show how powerful and accurate attacks against QBSes can already be found by an automated system, allowing for highly complex QBSes to be automatically tested "at the pressing of a button".
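The evolutionary search can be caricatured with a toy mutation operator on a multiset of queries; QuerySnout's actual operator and query representation are richer, so everything below, including the three move types, is an illustrative assumption rather than the paper's method.

```python
import random

def mutate(queries, pool, rng):
    """One mutation step on a multiset of queries: with equal probability,
    add a random query from the pool, drop one, or swap one for another."""
    out = list(queries)
    move = rng.choice(["add", "drop", "swap"])
    if move == "add" or not out:
        out.append(rng.choice(pool))
    elif move == "drop":
        out.pop(rng.randrange(len(out)))
    else:
        out[rng.randrange(len(out))] = rng.choice(pool)
    return out
```

In a full evolutionary loop, candidate multisets produced by such mutations would be scored by the accuracy of the downstream classifier that infers the sensitive attribute from the QBS answers, and the best candidates kept for the next generation.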
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.